33 research outputs found

    Status Report of the DPHEP Study Group: Towards a Global Effort for Sustainable Data Preservation in High Energy Physics

    Data from high-energy physics (HEP) experiments are collected with significant financial and human effort and are mostly unique. An inter-experimental study group on HEP data preservation and long-term analysis was convened as a panel of the International Committee for Future Accelerators (ICFA). The group was formed by large collider-based experiments and investigated the technical and organisational aspects of HEP data preservation. An intermediate report was released in November 2009 addressing the general issues of data preservation in HEP. This paper includes and extends the intermediate report. It provides an analysis of the research case for data preservation and a detailed description of the various projects at experiment, laboratory and international levels. In addition, the paper provides a concrete proposal for an international organisation in charge of the data management and policies in high-energy physics.

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    25th International Conference on Computing in High Energy & Nuclear Physics

    Computing for the Large Hadron Collider (LHC) at CERN arguably started shortly after the commencement of data taking at the previous machine – LEP – and some would argue it was even before. Without specifying an exact date, it was certainly prior to when today’s large(st) collaborations, namely ATLAS and CMS, had formed and been approved, and before the LHC itself was given the official go-ahead at the 100th meeting of the CERN Council in 1995. Approximately the first decade was spent on research and development; the second – from the beginning of the new millennium – on grid exploration and hardening; and the third on supporting LHC data taking, production, analysis and, most importantly, obtaining results.

    WLCG Collaboration Workshop (Tier0/Tier1/Tier2)


    Collaborative Long-Term Data Preservation: From Hundreds of PB to Tens of EB

    In 2012, the Study Group on Data Preservation in High Energy Physics (HEP) for Long-Term Analysis, more frequently known as DPHEP, published a Blueprint report detailing the motivation for, problems with, and situation of data preservation across all of the main HEP laboratories worldwide. In September of that year, an open workshop was held in Krakow to prepare an update to the European Strategy for Particle Physics (ESPP), which was formally adopted by a special session of CERN’s Council in May 2013 in Brussels; key elements from the Blueprint were input to that discussion. A new update round of the ESPP has recently been launched, and it is timely to consider the progress made since 2012/2013, list the outstanding problems and outline possible future directions for this work.

    The Evolution of Databases in HEP


    Grid today, clouds on the horizon

    By the time of CCP 2008, the largest scientific machine in the world – the Large Hadron Collider – had been cooled down as scheduled to its operational temperature of below 2 K and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our “Higgs in one basket” – that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219–223]. After many years of preparation, 2008 saw a final “Common Computing Readiness Challenge” (CCRC'08) – aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change – as always – is on the horizon. The current funding model for Grids – which in Europe has been through three generations of EGEE projects, together with related projects in other parts of the world, including South America – is evolving towards a long-term, sustainable e-infrastructure, such as the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as that of “Cloud Computing”, are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements of production application communities in terms of stability and continuity in the medium to long term.

    The State of Readiness of LHC Computing


    Computing
